A Regularized Newton-Like Method for Nonlinear PDE
Authors
Abstract
Similar Papers
A new Newton-like method for solving nonlinear equations
This paper presents an iterative scheme for solving nonlinear equations. We establish a new rational approximation model with linear numerator and denominator which generalizes the local linear model. We then apply the new approximation to nonlinear equations and propose an improved Newton's method to solve them. The new method revises the Jacobian matrix by a rank-one matrix each iteratio...
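The rank-one revision of the Jacobian described above is in the spirit of Broyden's classical update; the abstract does not give the paper's exact formula, so the following is a minimal Broyden-style sketch under that assumption, not the authors' method:

```python
import numpy as np

def broyden_solve(F, x, B, tol=1e-10, max_iter=50):
    """Quasi-Newton iteration for F(x) = 0 with a rank-one
    (Broyden-style) revision of the Jacobian approximation B."""
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        s = np.linalg.solve(B, -Fx)       # step: B s = -F(x)
        x = x + s
        y = F(x) - Fx                     # change in the residual
        # rank-one correction so that B_new s = y (secant condition)
        B = B + np.outer(y - B @ s, s) / (s @ s)
    return x
```

In one dimension this reduces to the secant method, e.g. solving `F(x) = x**2 - 2` from `x = 1.5` with initial slope 3 converges to the square root of 2.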
Truncated regularized Newton method for convex minimizations
Recently, Li et al. (Comput. Optim. Appl. 26:131–147, 2004) proposed a regularized Newton method for convex minimization problems. The method retains the local quadratic convergence property without requiring nonsingularity of the Hessian. In this paper, we develop a truncated regularized Newton method and show its global convergence. We also establish a local quadratic convergence theorem fo...
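The abstract does not detail the truncation; a common reading is an inexact Newton step in which the regularized system (H + μI)d = −g is solved only approximately by conjugate gradients under a forcing condition. A hedged sketch under that assumption (the tolerance `eta` and regularization `mu` are illustrative, not the paper's choices):

```python
import numpy as np

def truncated_regularized_newton_step(grad, hess, x, mu=1e-2, eta=0.1):
    """One 'truncated' regularized Newton step: solve (H + mu*I) d = -g
    approximately, stopping conjugate gradients once the residual
    drops below eta*||g|| (an inexact-Newton forcing condition)."""
    g = grad(x)
    A = hess(x) + mu * np.eye(len(x))
    # plain conjugate gradients applied to A d = -g
    d = np.zeros_like(g)
    r = -g - A @ d
    p = r.copy()
    tol = eta * np.linalg.norm(g)
    for _ in range(2 * len(x)):
        if np.linalg.norm(r) <= tol:
            break                          # truncation: accept inexact d
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        d = d + alpha * p
        r_new = r - alpha * Ap
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
    return x + d
```

Repeating this step on a convex quadratic such as f(x) = ½ xᵀQx drives the iterates toward the minimizer despite the inexact inner solves.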
Distributed Newton Method for Regularized Logistic Regression
Regularized logistic regression is a very successful classification method, but for large-scale data, its distributed training has not been investigated much. In this work, we propose a distributed Newton method for training logistic regression. Many interesting techniques are discussed for reducing the communication cost. Experiments show that the proposed method is faster than state of the ar...
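As a point of reference for the core (non-distributed) iteration, a single Newton step for L2-regularized logistic regression can be sketched as follows; the paper's distributed techniques and communication-reduction tricks are not reproduced here:

```python
import numpy as np

def logreg_newton_step(X, y, w, lam=1.0):
    """One Newton step for L2-regularized logistic regression:
    minimize sum_i log(1 + exp(-y_i * x_i . w)) + (lam/2) ||w||^2,
    with labels y_i in {+1, -1}."""
    z = y * (X @ w)
    sig = 1.0 / (1.0 + np.exp(-z))           # sigma(y_i x_i . w)
    g = -X.T @ (y * (1.0 - sig)) + lam * w   # gradient
    D = sig * (1.0 - sig)                    # per-example curvature weights
    H = X.T @ (D[:, None] * X) + lam * np.eye(len(w))
    return w - np.linalg.solve(H, g)
```

In a distributed setting the expensive parts are forming Xᵀ(D·X) and the Hessian-vector products, which is where the communication cost the abstract mentions arises.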
Regularized Newton method for unconstrained convex optimization
We introduce the regularized Newton method (rnm) for unconstrained convex optimization. For any convex function with a bounded optimal set, the rnm generates a sequence that converges to the optimal set from any starting point. Moreover, the rnm requires neither strong convexity nor smoothness properties in the entire space. If the function is strongly convex and smooth enough in the neighborho...
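The basic iteration behind such a method replaces the Newton system Hd = −g with (H + μI)d = −g, which stays solvable even when H is singular or only positive semidefinite. A minimal sketch (the fixed μ and the test problem are illustrative assumptions, not the paper's rnm):

```python
import numpy as np

def regularized_newton(grad, hess, x, mu=1e-2, iters=30):
    """Iterate x <- x - (H + mu*I)^{-1} g. The mu*I term keeps the
    linear system nonsingular even when the Hessian H is not."""
    for _ in range(iters):
        g = grad(x)
        H = hess(x)
        x = x + np.linalg.solve(H + mu * np.eye(len(x)), -g)
    return x
```

For example, f(x, y) = ½(x + y − 1)² has the singular Hessian [[1, 1], [1, 1]] everywhere and an optimal set that is a whole line; the iterates still converge to that set because the regularized system remains solvable.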
A Modified Regularized Newton Method for Unconstrained Nonconvex Optimization
In this paper, we present a modified regularized Newton method for unconstrained nonconvex optimization using a trust region technique. We show that if the gradient and Hessian of the objective function are Lipschitz continuous, then the modified regularized Newton method (M-RNM) has a global convergence property. Numerical results show that the algorithm is very efficient.
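A common way to combine regularization with a trust-region safeguard is to adapt μ from the ratio of actual to predicted decrease, as in Levenberg–Marquardt methods; the sketch below follows that generic pattern and is not the paper's M-RNM (the update thresholds and constants are illustrative assumptions):

```python
import numpy as np

def modified_regularized_newton(f, grad, hess, x, mu=1.0, iters=200):
    """Regularized Newton with a trust-region-style update of mu:
    shrink mu when the quadratic model predicts the decrease well,
    grow it otherwise, and accept only improving steps."""
    for _ in range(iters):
        g, H = grad(x), hess(x)
        d = np.linalg.solve(H + mu * np.eye(len(x)), -g)
        pred = -(g @ d + 0.5 * d @ H @ d)   # model-predicted decrease
        actual = f(x) - f(x + d)            # true decrease
        rho = actual / pred if pred > 0 else -1.0
        if rho > 0.75:
            mu = max(0.5 * mu, 1e-8)        # good model: trust it more
        elif rho < 0.25:
            mu = 2.0 * mu                   # poor model: regularize more
        if rho > 0:
            x = x + d                       # accept only improving steps
    return x
```

On a standard nonconvex test problem such as the Rosenbrock function, this safeguarded iteration makes steady progress even through regions where the Hessian is indefinite.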
Journal
Journal title: Numerical Functional Analysis and Optimization
Year: 2015
ISSN: 0163-0563, 1532-2467
DOI: 10.1080/01630563.2015.1069328